Convergence of Stochastic Proximal Gradient Algorithm

Authors
Abstract

Similar articles

Ergodic convergence of a stochastic proximal point algorithm

The purpose of this paper is to establish the almost sure weak ergodic convergence of a sequence of iterates $(x_n)$ given by $x_{n+1} = (I + \lambda_n A(\xi_{n+1}, \cdot))^{-1}(x_n)$, where $(A(s, \cdot) : s \in E)$ is a collection of maximal monotone operators on a separable Hilbert space, $(\xi_n)$ is an independent identically distributed sequence of random variables on $E$, and $(\lambda_n)$ is a positive sequence in $\ell^2 \setminus \ell^1$. The weighted average...
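As a rough illustration of this iteration (not taken from the paper), here is a minimal Python sketch for the toy choice $A(s, x) = x - s$, the gradient of $\tfrac{1}{2}\|x - s\|^2$, where the resolvent has a closed form and the zero of $E[A(\xi, \cdot)]$ is $E[\xi]$; the operator, step sizes, and constants are all illustrative assumptions:

```python
import numpy as np

# Sketch of the stochastic proximal point iteration
#     x_{n+1} = (I + lam_n * A(xi_{n+1}, .))^{-1} (x_n)
# for the toy case A(s, x) = x - s, whose resolvent is
# the closed form (x + lam * s) / (1 + lam).

rng = np.random.default_rng(0)
d = 5
mean = rng.normal(size=d)        # E[xi]: the zero of E[A(xi, .)]

x = np.zeros(d)
xbar = np.zeros(d)               # weighted (ergodic) average of the iterates
wsum = 0.0
for n in range(1, 100_000):
    xi = mean + 0.5 * rng.normal(size=d)   # i.i.d. sample xi_{n+1}
    lam = 1.0 / n ** 0.75                  # step sizes in l^2 \ l^1
    x = (x + lam * xi) / (1.0 + lam)       # resolvent (proximal point) step
    wsum += lam
    xbar += (lam / wsum) * (x - xbar)      # running weighted average

print(np.linalg.norm(xbar - mean))         # small: xbar approaches E[xi]
```

The weighted running average mirrors the ergodic (averaged) sequence whose convergence the abstract refers to.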


The proximal-proximal gradient algorithm

We consider the problem of minimizing a convex objective which is the sum of a smooth part, with Lipschitz continuous gradient, and a nonsmooth part. Inspired by various applications, we focus on the case when the nonsmooth part is a composition of a proper closed convex function $P$ and a nonzero affine map, with the proximal mappings of $\tau P$, $\tau > 0$, easy to compute. In this case, a direct application...
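For orientation, the sketch below shows the plain proximal gradient iteration this abstract starts from, in the easy case where the nonsmooth part itself (here an $\ell^1$ penalty, whose prox is soft-thresholding) has a cheap proximal mapping; the paper's proximal-proximal variant targets the harder composed case $P$ of an affine map, which this sketch does not implement. The problem data are illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
n, d = 50, 20
M = rng.normal(size=(n, d))
y = rng.normal(size=n)
tau = 0.1
step = 1.0 / np.linalg.norm(M, 2) ** 2   # 1/L, L = Lipschitz const. of grad f

x = np.zeros(d)
for _ in range(500):
    grad = M.T @ (M @ x - y)                           # gradient of smooth part
    x = soft_threshold(x - step * grad, step * tau)    # proximal step on l1 part
```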


On Stochastic Proximal Gradient Algorithms

We study a perturbed version of the proximal gradient algorithm for which the gradient is not known in closed form and should be approximated. We address the convergence and derive a non-asymptotic bound on the convergence rate for the perturbed proximal gradient, a perturbed averaged version of the proximal gradient algorithm, and a perturbed version of the fast iterative shrinkage-thresholding ...
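A minimal sketch of such a perturbed (stochastic) proximal gradient iteration, with the full gradient of the smooth part replaced by a minibatch estimate before the proximal step, might look as follows; the lasso setup, batch size, and step-size schedule are illustrative choices, not the paper's:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(2)
n, d, batch = 1000, 20, 32
M = rng.normal(size=(n, d))
x_true = np.zeros(d)
x_true[:3] = [2.0, -1.0, 0.5]
y = M @ x_true + 0.1 * rng.normal(size=n)
tau = 0.05

x = np.zeros(d)
for k in range(1, 5000):
    idx = rng.choice(n, size=batch, replace=False)
    grad = M[idx].T @ (M[idx] @ x - y[idx]) / batch    # unbiased minibatch gradient
    gamma = 0.5 / k ** 0.6                             # diminishing step size
    x = soft_threshold(x - gamma * grad, gamma * tau)  # proximal (shrinkage) step
```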


Convergence of Proximal-Gradient Stochastic Variational Inference under Non-Decreasing Step-Size Sequence

Several recent works have explored stochastic gradient methods for variational inference that exploit the geometry of the variational-parameter space. However, the theoretical properties of these methods are not well understood, and these methods typically only apply to conditionally conjugate models. We present a new stochastic method for variational inference which exploits the geometry of the ...


Convergence of Stochastic Gradient Descent for PCA

We consider the problem of principal component analysis (PCA) in a streaming stochastic setting, where our goal is to find a direction of approximate maximal variance, based on a stream of i.i.d. data points in $\mathbb{R}^d$. A simple and computationally cheap algorithm for this is stochastic gradient descent (SGD), which incrementally updates its estimate based on each new data point. However, due to the ...
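A minimal sketch of this incremental SGD scheme for streaming PCA (Oja's rule is the standard instance) on a synthetic stream could read as follows; the planted-direction data and step sizes are illustrative assumptions only:

```python
import numpy as np

# SGD / Oja's rule for streaming PCA: on each new data point x_t,
# move the estimate w toward x_t * (x_t^T w) and renormalize.

rng = np.random.default_rng(3)
d = 10
top = rng.normal(size=d)
top /= np.linalg.norm(top)       # planted top eigenvector of the covariance
w = rng.normal(size=d)
w /= np.linalg.norm(w)

for t in range(1, 50_000):
    x = 2.0 * top * rng.normal() + 0.3 * rng.normal(size=d)  # stream sample
    eta = 1.0 / (t + 100)                                    # step size
    w += eta * x * (x @ w)                                   # Oja / SGD update
    w /= np.linalg.norm(w)                                   # project to sphere

print(abs(w @ top))   # close to 1 when w aligns with the top eigenvector
```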



Journal

Journal title: Applied Mathematics & Optimization

Year: 2019

ISSN: 0095-4616, 1432-0606

DOI: 10.1007/s00245-019-09617-7